I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, once the entity file gets overwritten, the app can no longer load it correctly.
Here is my code for saving the entity:
import SwiftUI
import RealityKit
import UniformTypeIdentifiers
struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        // Build a simple blue box entity and serialize it to a .reality file.
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing: \(error)")
                    }
                }
            }
        }
        .padding()
    }
}
Every time I press "Save Entity", I see a warning similar to:
Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.
When I open the immersive space, I attempt to load the same file:
import SwiftUI
import RealityKit
import UniformTypeIdentifiers
struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            guard let type = UTType.realityFile.preferredFilenameExtension else {
                return
            }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")

            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }

            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}
I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The errors that appear when the immersive space attempts to load the new entity are:
Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
The order of operations to make this happen:
1. Launch app
2. Press "Save Entity" to save the entity
3. "Open Immersive Space" to view entity
4. Press "Save Entity" to overwrite the entity
5. "Open Immersive Space" to view entity, failed asset load request
Also:
1. Launch app (the entity should still be saved from the last time the app ran)
2. "Open Immersive Space" to view entity
3. Press "Save Entity" to overwrite the entity
4. "Open Immersive Space" to view entity, failed asset load request
NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
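For what it's worth, the workaround I'm planning to try next is to write the .reality file to a temporary URL and then swap it into place atomically, so the loader never sees a partially overwritten archive. This is untested and assumes the failures come from overwriting the file in place; saveEntityAtomically is just a name I made up for this sketch:
import Foundation
import RealityKit
import UniformTypeIdentifiers

func saveEntityAtomically(_ entity: Entity, to fileURL: URL) async throws {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension(UTType.realityFile.preferredFilenameExtension ?? "reality")

    // Write the archive to a temporary location first.
    try await entity.write(to: tempURL)

    // Swap it into place in one step (or just move it if nothing exists yet).
    if FileManager.default.fileExists(atPath: fileURL.path) {
        _ = try FileManager.default.replaceItemAt(fileURL, withItemAt: tempURL)
    } else {
        try FileManager.default.moveItem(at: tempURL, to: fileURL)
    }
}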
Is there a way to render stereoscopic (left/right) images on a 2D plane that resides in a SwiftUI view?
I know this is possible in RealityKit shaders and in immersive Metal compositing, but is it possible via SwiftUI shaders, CAMetalLayer, etc.?
I'd like to draw a 2D window with standard UI chrome (resize, move, etc.) that displays stereoscopic content on the flat plane of the window.
This is a very exciting feature in the 26.4 beta, but from the documentation it seems it can only integrate with the NVIDIA CloudXR™ SDK.
I'm wondering whether it's possible to use this tool to stream immersive video from a Mac to Vision Pro?
When using the new RealityKit Manipulation Component on Entities, indirect input will never translate the entity - no matter what settings are applied. Direct manipulation works as expected for both translation and rotation.
Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?
visionOS 26 Beta 2
Built with macOS 26 Beta 2 and Xcode 26 Beta 2
Attached is reproducible sample code; I have tried this in other projects with the same results.
var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
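In the meantime, the fallback I'm using is to drive translation myself with a DragGesture targeted at the entity, which does respond to indirect (gaze + pinch) input. This is not the ManipulationComponent behavior, just a manual workaround; it assumes the entity has an InputTargetComponent and a CollisionComponent, and the snap-to-finger simplification below ignores the initial grab offset:
import SwiftUI
import RealityKit

struct ManualTranslationView: View {
    var body: some View {
        RealityView { content in
            if let entity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
                // Both components are required for gesture hit-testing.
                entity.components.set(InputTargetComponent())
                entity.components.set(CollisionComponent(shapes: [.generateBox(width: 0.2, height: 0.2, depth: 0.2)]))
                entity.position = [0, 1, -0.5]
                content.add(entity)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    // Convert the 3D gesture location into the entity's parent space and move it there.
                    value.entity.position = value.convert(value.location3D, from: .local, to: parent)
                }
        )
    }
}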
I am a developer working on developing a space journal application. During the development process, I encountered several crucial strategic and technical decisions, and I would like to hear the experiences of those who have gone through similar situations. Here are the simplified versions of several questions I have.
Resource allocation: Which problem should I address first?
Design direction: In terms of interaction and UI design, how should I balance "immersion" and "usability"?
Market selection: Is it easier for a business to survive in the early stages as B2B or B2C?
Cost estimation: How can I reasonably present to my investors the development costs of this project?
To avoid relying solely on intuition in my decisions, I created a short questionnaire, hoping to gather more structured opinions from my colleagues. If you are also exploring visionOS, I sincerely hope you can take a few minutes to fill it out. The results are extremely important to me, and I would be more than happy to share the final summary findings with you.
Hi, is there a way to track feet in a visionOS app in an immersive space? I want the whole body to be visible in VR, and I want to know if the user touches an object with their foot.
Hi, I wonder if there's something that can be configured to force Xcode (and preferably MVD too) to use Ethernet connection between Mac Mini and Apple Vision Pro (over a USB hub, not a direct USB connection)?
If I connect AVP to Mac directly via USB, the bridge gets created and both MVD and Xcode default to it, which is great because of higher speed and lower latency.
My problem is that I work with an external camera, so I can have either the camera or the Mac connection, but not both. I tried to solve that by plugging in a small active USB hub, so the strap and camera are connected to it, plus it has an Ethernet adapter, which is plugged into a Mac port. I tried internet sharing on the Mac - AVP has internet access and I can ping AVP from the Mac, but Xcode and MVD still use Wi-Fi. I tried to manually configure the bridge without internet sharing - same effect. I tried to make the bridge the highest-priority connection - nothing changed. I tried to force routing to the AVP IP over the bridge - nothing (and it seems that my routing entry went missing after some time and was replaced by "use Wi-Fi interface").
So - is there something more I can do to make at least Xcode go over the cable? Debugging over Wi-Fi often takes forever.
I have already added the license and entitlement for Passthrough in screen capture, but I still get a black background instead of the real world. Do you have any idea? I've been stuck on this for a long time 😢
I first started using the SwiftUI pushWindow API in visionOS 26.2, and I've reported several bugs I discovered, listed below.
Under certain circumstances, pushed window relationships may break, and this behavior affects all other apps, not just the app that caused the problem, until the next device reboot. In other cases, the system may crash and restart.
(FB21287011) When a window presented with pushWindow is dismissed, its parent window reappears in the wrong location
(FB21294645) Pinning a pushed window to a wall breaks pushWindow for all other apps on the system
(FB21594646) pushWindow interacts poorly with the window bar close app option
(FB21652261) If a window locked to a wall calls pushWindow, the original window becomes unlocked
(FB21652271) If a window locked in place calls pushWindow and the pushed window is closed, the system freezes
(FB21828413) pushWindow, UIApplication.open, and a dismissed immersive space result in multiple failures that require a device reboot
(FB21840747) visionOS randomly foregrounds a backgrounded immersive space app with a pushed window's parent window visible instead of the pushed window
(FB21864652) When a running app is selected in the visionOS home view, windows presented with pushWindow spontaneously close
(FB21873482) Pushed windows use the fixed scaling behavior instead of the dynamic scaling behavior
I'm posting the issues here in case this information is helpful to other developers. I'd also like to hear about other pushWindow issues developers have encountered, so I can watch out for them.
Questions:
I've discovered that some of the issues above can be partially worked around by applying the defaultLaunchBehavior and restorationBehavior scene modifiers to suppress window restoration and locking, which pushWindow appears to interact poorly with. Are there other recommended workarounds?
I've observed that the Photos and Settings apps, which predate the pushWindow API, are not affected by the issues I reported. Are there other more reliable ways I could achieve the same behavior as pushWindow without relying on that API?
I'd appreciate any guidance Apple engineers could provide. Thank you.
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should:
Show for 3 seconds when entering immersive mode
Auto-hide after 3 seconds
Reappear when the user taps anywhere (using SpatialTapGesture)
Keep its buttons responsive to gaze + pinch interaction
The Problem:
When I add .windowStyle(.plain) to achieve transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch interaction. The buttons only respond to direct finger touch (poking).
Without .windowStyle(.plain): Buttons work correctly with gaze + pinch, but I cannot achieve transparent window background for hiding.
With .windowStyle(.plain): Window can be transparent, but buttons lose gaze + pinch interaction.
Code:
App.swift:
@main
struct MyApp: App {
    @StateObject private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(model)
        }
        .defaultSize(width: 900, height: 700)
        .windowResizability(.contentSize)
        .windowStyle(.plain) // <-- This causes the interaction issue

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environmentObject(model)
        }
    }
}
ContentView.swift (simplified):
struct ContentView: View {
    @EnvironmentObject var model: AppModel
    @State private var isMenuVisible: Bool = true

    var body: some View {
        VStack {
            if model.isImmersiveViewActive {
                if isMenuVisible {
                    // This menu's buttons don't respond to gaze + pinch
                    immersiveControlMenu
                }
            } else {
                mainMenuButtons
            }
        }
        .glassBackgroundEffect()
    }

    private var immersiveControlMenu: some View {
        HStack {
            Button("Exit") {
                exitImmersiveSpace()
            }
            .buttonStyle(.bordered) // Also tried .plain, same issue
        }
        .padding()
        .glassBackgroundEffect()
    }
}
ImmersiveView.swift:
struct ImmersiveView: View {
    @EnvironmentObject var model: AppModel

    var body: some View {
        RealityView { content in
            // Panorama sphere (`material` is the panorama texture material, created elsewhere)
            let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
            content.add(sphere)

            // Tap detector for menu toggle
            let tapDetector = Entity()
            tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)]))
            tapDetector.components.set(InputTargetComponent())
            content.add(tapDetector)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    model.shouldShowMenu = true
                }
        )
    }
}
Environment:
Xcode 26.2
visionOS 26.3
Vision Pro device
Questions:
Is .windowStyle(.plain) expected to affect button interaction behavior?
What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity?
Is there an alternative to .windowStyle(.plain) for hiding window chrome in visionOS?
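For context, the alternative I'm currently exploring is to drop the separate window entirely and put the menu inside the immersive space as a RealityView attachment, which keeps gaze + pinch working without any transparent window. A minimal sketch (menuVisible is a placeholder, and the tap-detector entity from ImmersiveView above is still needed for SpatialTapGesture to have something to hit):
import SwiftUI
import RealityKit

struct ImmersiveMenuView: View {
    @State private var menuVisible = true

    var body: some View {
        RealityView { content, attachments in
            if let menu = attachments.entity(for: "menu") {
                menu.position = [0, 1.3, -1.0] // roughly eye height, 1 m in front
                content.add(menu)
            }
        } update: { _, attachments in
            // Show/hide the menu without tearing down the scene.
            attachments.entity(for: "menu")?.isEnabled = menuVisible
        } attachments: {
            Attachment(id: "menu") {
                HStack {
                    Button("Exit") {
                        // dismiss the immersive space here
                    }
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in menuVisible = true }
        )
    }
}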
Thank you for any guidance!
Hi everyone,
I’m working with RealityKit on visionOS and I’m seeing unexpected behavior when the user long-presses the Digital Crown, which recenters the world.
Observed behavior:
When the world is recentered via long-pressing the Crown, the models remain visually in the correct place (as expected).
However, if I query the model’s position or transform immediately after recentering (e.g. entity.position or similar), I still get the old values from before recenter.
As soon as I interact with the model using a gesture (drag/rotate/scale), the position updates and then querying it returns the correct, updated values.
So effectively:
Recenter happens
Visual position is correct
Programmatic position remains stale
First gesture causes the position to “snap” to the correct updated value
Questions:
Is there any event, notification, or callback that fires when the world is recentered due to a long press of the Crown button?
Is there a recommended way to get the updated world-space transform immediately after recenter, without waiting for a gesture?
Is this expected behavior due to deferred/lazy transform updates in RealityKit?
Right now it feels like recentering updates the coordinate system but doesn’t immediately commit new transform values to entities until some interaction occurs.
Any guidance or best-practice patterns for handling this would be appreciated.
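The stopgap I'm sketching in the meantime is a heuristic rather than a documented recenter callback (I'm not aware of one): poll the device pose with ARKit's WorldTrackingProvider and treat a large jump between consecutive poses as a likely recenter, then re-read the entity transforms. This assumes an open immersive space and a running ARKitSession; the 0.5 m threshold and 10 Hz polling rate are placeholders:
import ARKit
import QuartzCore
import simd

final class RecenterWatcher {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()
    private var lastTransform: simd_float4x4?

    func start(onPossibleRecenter: @escaping () -> Void) async throws {
        try await session.run([worldTracking])
        while true {
            try await Task.sleep(nanoseconds: 100_000_000) // ~10 Hz
            guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { continue }
            let transform = anchor.originFromAnchorTransform
            if let last = lastTransform {
                // A recenter shifts the world origin, so the device pose jumps discontinuously.
                let jump = simd_distance(transform.columns.3, last.columns.3)
                if jump > 0.5 { onPossibleRecenter() }
            }
            lastTransform = transform
        }
    }
}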
Thanks!
Hi everyone,
I'm new to visionOS development. I'm trying to create a physics-based scene (with gravity) where users can pick up and move objects on a workbench. I am struggling with physics interactions during the drag gesture:
Kinematic Mode: If I switch to .kinematic during the drag, the object moves smoothly but clips through other objects (no collisions).
Dynamic Mode: I tried keeping it .dynamic and applying linear velocity toward the hand position, but the movement feels laggy and unresponsive.
Hybrid Approach: I tried switching to .kinematic during DragGesture.onChange and back to .dynamic on collision, but this causes the entity to jitter/shake violently when touching other objects.
Has anyone found a clean way to drag objects while maintaining solid collisions?
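For reference, here's a rough sketch of the force-based variant I'm experimenting with: keep the body .dynamic and pull it toward the hand with a spring-like force plus damping, so collisions stay fully resolved by the simulation instead of being bypassed. draggedEntity, targetPosition, and the gains are placeholders, not tuned values:
import RealityKit

// Call once when the drag starts: damping keeps the body from oscillating around the target.
func beginDrag(_ draggedEntity: ModelEntity) {
    draggedEntity.physicsBody?.linearDamping = 10
}

// Call once per frame while the drag is active: a spring force pulls the body toward the
// hand position in world space, scaled by mass so heavier objects still follow.
func steerTowardTarget(_ draggedEntity: ModelEntity, targetPosition: SIMD3<Float>) {
    guard let body = draggedEntity.physicsBody else { return }
    let stiffness: Float = 60
    let offset = targetPosition - draggedEntity.position(relativeTo: nil)
    draggedEntity.addForce(offset * stiffness * body.massProperties.mass, relativeTo: nil)
}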
Thanks for your help!
Hello,
I am building a kiosk-style app for visionOS which will be used in Guided Access mode and handed to various visitors. Each of them will go through the hands + eyes setup, the standard Guided Access flow.
I want my experience to auto-start playing content when setup is done. I looked everywhere but found no way to detect whether setup is complete. Adding any kind of interface to start the app manually is also risky, since buttons and so on remain visible and interactable WHILE setup takes place. A delay-based approach won't work either, since setup can be skipped, or fail, or be done quickly or slowly... it takes anywhere between 10 seconds and a few minutes.
So the question is: is there any way to get a notification, or check some Bool or similar, that will tell me that the hands + eyes setup in Guided Access is complete (or skipped)?
Thanks in advance!
I have more than 1 TB of immersive videos, and I want to play them. Is there a way to connect an SSD to Vision Pro via the developer strap? Or is it possible to connect a 10G Ethernet adapter, then use Ethernet to reach a disk/NAS and attach the drive via IP?
I have a ModelEntity with GroundingShadowComponent
entity.enumerateHierarchy { child, stop in
    child.components.set(GroundingShadowComponent(castsShadow: true))
}
When I set it on the table, I can see the shadow on the table, even if I disable plane detection. However, when I enable plane detection and the plane's material is OcclusionMaterial, I cannot see the shadow on the table. As far as I know, receivesDynamicLighting is not usable in visionOS. So how can I cast a shadow on OcclusionMaterial in visionOS? Or rather, is it possible to have the shadow properly displayed on the tabletop while ensuring that I cannot see objects beneath the table through it?
I’m currently developing an app for visionOS and working with an ImmersiveSpace. I’ve noticed that the system automatically enforces a safety boundary at approximately 1.5 meters. If the user moves beyond this limit, the content fades out or the system reverts to Passthrough.
Is there any way to disable this boundary or extend its radius?
This app is currently in the experimental/verification phase, and it is intended to be run on a Vision Pro in Developer Mode. Since the primary goal is to test large-scale spatial interactions during development, I am looking for any way—including developer-specific settings or configurations—to bypass or expand this limit.
If there isn't a direct API to change the boundary size, are there any recommended workarounds for testing movement within large environments?
Any insights would be greatly appreciated!
Hi, I'm using a Vision Pro (M5) and Developer Strap 2. When I connect it to my Mac, it still shows 480M… all systems are on the latest firmware…
Does anyone know why?
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV?
APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical.
If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency?
Thank you for any insights.
Hello Apple team and developer community,
I am preparing a visionOS app for a fair environment, where we want to automatically stream the current experience to a nearby monitor via AirPlay, without requiring guests or staff to manually interact with the Control Center or AirPlay pickers all the time.
The goal is to provide a smooth, frictionless setup so attendees can focus on the demo, not the configuration.
Feature Request:
A supported API or method to programmatically start/stop AirPlay video streaming (mirroring or external playback) from within a visionOS app, allowing the current experience to be instantly displayed on an external monitor or Apple TV for the audience.
Context & Rationale:
In a trade fair or exhibition setting, rapid guest turnaround and minimal staff intervention are crucial. Having to manually guide each visitor through AirPlay setup is impractical.
As I understand it, AVRoutePickerView can be used for this on iOS/macOS, but it is not available in visionOS. Enabling similar automated streaming on visionOS would make the device far more suitable for live demos and public showcases.
Questions:
Are there any supported workarounds or best practices for enabling automated screen streaming or AirPlay initiation on visionOS in public demo environments that I missed?
Is Apple considering adding programmatic AirPlay control or accessibility features to support such use cases in future visionOS releases?
Thank you for considering this request! If there are recommended patterns, entitlements, or accessibility solutions we could explore for trade fair scenarios, your guidance would be greatly appreciated.
Best regards,
Julian Zürn - IPI, HS Kempten
How can I renew visionOS Enterprise API?
I've spent so much time contacting Apple Developer Support. They said they don't know the renewal process either and are "checking with the internal operations team" - but it's been 2 months with no updates.
The official documentation (https://developer.apple.com/documentation/visionOS/building-spatial-experiences-for-business-apps-with-enterprise-apis#Request-the-entitlements) says:
"The license file comes with an expiration date, so you need to renew it before then to ensure your entitlements continue to function."
But it doesn't explain HOW to renew.
When I asked Apple Support about this, they told me:
"After Apple approves your app for one or more entitlements, you receive a license file, along with additional instructions."
But I never received any instructions when I was first approved, and I still don't know how to renew. There's also no direct way to contact the Enterprise API team.
Now my visionOS Enterprise API license has been expired for 2 months. I submitted a renewal request, but I still haven't heard anything back. Is it normal to take more than 2 months for approval? Any advice or shared experiences would be really helpful.
Thanks!